    Finite element modelling of the inertia friction welding of a CrMoV alloy steel including the effects of solid-state phase transformations

    Finite element (FE) process modelling of the inertia friction welding (IFW) of two tubular CrMoV components has been carried out using the DEFORM-2D (v10.2) software. The model was validated against experimental test welds of the material, using process data such as upset and rotational velocity, together with thermal data collected during the process by embedded thermocouples. The as-welded residual stress predicted by the FE model has been compared with experimental measurements taken on the welded component using the hole-drilling technique. The effects of the solid-state phase transformations that occur in the steel are included, and the predicted trends in residual stress replicate the experimental measurements well.
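    For context, thermal-mechanical FE models of inertia friction welding are typically built around the transient heat-conduction equation with a frictional heat source at the weld interface. The following is a generic textbook formulation offered only as orientation; it is not taken from the paper, and the interface-flux approximation is an assumption on our part:

    $\rho c_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \dot{q}$, with the frictional heat flux at the faying surface commonly approximated as $q_f(r,t) = \mu \, p(t) \, \omega(t) \, r$, where $\rho$, $c_p$ and $k$ are the density, specific heat and thermal conductivity, $\mu$ is the friction coefficient, $p$ the interface pressure, $\omega$ the angular velocity of the rotating component, and $r$ the radial position.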

    On sharp bilinear Strichartz estimates of Ozawa-Tsutsumi type

    We provide a comprehensive analysis of sharp bilinear estimates of Ozawa-Tsutsumi type for solutions u of the free Schrödinger equation, which give sharp control on |u|^2 in classical Sobolev spaces. In particular, we generalize their estimates in a way that unifies them with some sharp bilinear estimates proved, via entirely different methods, by Carneiro and by Planchon-Vega, by seeing them all as special cases of a one-parameter family of sharp estimates. We show that the extremal functions are solutions of the Maxwell-Boltzmann functional equation and provide a new proof that this equation admits only Gaussian solutions. We also make a connection to certain sharp estimates on u^2 involving certain dispersive Sobolev norms. Comment: 17 pages, references updated.
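    For orientation, the original Ozawa-Tsutsumi estimate referred to here has, schematically, the following form (our paraphrase of the standard literature, with the sharp constant suppressed): for a solution $u = e^{it\Delta}u_0$ of the free Schrödinger equation on $\mathbb{R}^d$,

    $\big\| (-\Delta)^{\frac{2-d}{4}} |u|^2 \big\|_{L^2(\mathbb{R} \times \mathbb{R}^d)} \le C_d \, \|u_0\|_{L^2(\mathbb{R}^d)}^2,$

    so that |u|^2 is controlled in a homogeneous Sobolev norm of order (2-d)/2 purely in terms of the mass of the initial datum.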

    Prisoner voting for the final general election before release is a solution that balances concerns about democratic rights

    Democratic Audit has recently featured analysis of prisoner voting rights from several leading experts. In the second of two new contributions to this debate – following Peter Ramsay’s earlier post – Chris Bennett and Daniel Viehoff argue that both sides of the debate can make strong claims to democratic principles. They make a new proposal that aims to balance these competing concerns.

    Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

    Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ∖ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p₁ ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
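    For readers new to the problem, BDD_{p,α} admits the following standard formulation (a textbook definition, paraphrased by us rather than quoted from the paper): given a basis of a lattice $\mathcal{L} \subset \mathbb{R}^n$ and a target $t \in \mathbb{R}^n$ satisfying the promise $\mathrm{dist}_p(t, \mathcal{L}) \le \alpha \cdot \lambda_1^{(p)}(\mathcal{L})$, where $\lambda_1^{(p)}(\mathcal{L})$ denotes the ℓ_p length of a shortest nonzero lattice vector, output a lattice vector $v \in \mathcal{L}$ achieving $\|t - v\|_p = \mathrm{dist}_p(t, \mathcal{L})$. By the triangle inequality, for α < 1/2 the promised solution is unique, which is why α = 1/2 is called the unique-decoding radius.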

    Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

    We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any p ≥ 1 and any constant γ < 2^{1/p}, the γ-approximate problem in the ℓ_p norm (γ-GapSVP_p) is not in RP unless NP ⊆ RP. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of γ-GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known NP-hardness results for GapSVP_p with p < ∞, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding n-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an O(√(log n)) factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
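    To make the central object concrete, here is a minimal Python sketch of a "Construction A Reed-Solomon lattice": the set of integer vectors whose residues mod q form a Reed-Solomon codeword. The toy parameters and helper names are ours for illustration; the paper's actual parameter choices differ.

    ```python
    # A minimal sketch of "Construction A" applied to a Reed-Solomon code:
    # the lattice L = { x in Z^n : (x mod q) is a codeword of C }.
    # Parameters here are illustrative toys, not those used in the paper.
    import itertools
    import numpy as np

    q = 7            # a small prime field size (assumption: q prime)
    n, k = 5, 2      # code length and dimension, with n <= q - 1 here
    alphas = list(range(1, n + 1))   # distinct evaluation points in F_q

    # Reed-Solomon generator matrix: G[i][j] = alphas[j]**i mod q,
    # so codewords are evaluations of polynomials of degree < k.
    G = np.array([[pow(a, i, q) for a in alphas] for i in range(k)])

    def rs_codewords():
        """Enumerate all q**k codewords (fine for tiny parameters)."""
        for msg in itertools.product(range(q), repeat=k):
            yield tuple((np.array(msg) @ G) % q)

    CODE = set(rs_codewords())

    def in_construction_a_lattice(x):
        """x (integer vector) is in L iff its residue mod q is a codeword."""
        return tuple(np.asarray(x) % q) in CODE

    # q*e_1 is in L (0 is a codeword), and so is any integer lift of a
    # codeword shifted by multiples of q in each coordinate.
    assert in_construction_a_lattice(q * np.eye(n, dtype=int)[0])
    c = next(iter(CODE))
    assert in_construction_a_lattice(np.array(c) + q * np.arange(n))
    ```

    A basis for this lattice can be obtained by lifting a generator matrix of the code to the integers and adjoining the vectors q·e_1, ..., q·e_n (then reducing to a full-rank basis, e.g. via the Hermite normal form); the sketch above only tests membership, which suffices to illustrate the definition.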

    Comparing choice models of river health improvement for the Goulburn River

    The extent of the benefits of improved river health remains uncertain. Quantifying these benefits is useful in prioritising policy investments. This study uses the choice modelling technique to estimate the value that households attach to attributes of improved river health. Data from a choice modelling survey supported by DSE Victoria are employed to elicit household preferences in a case study of the Goulburn River. Results from conditional and nested logit model specifications indicate that respondents hold positive values for higher levels of fish and bird populations and for increasing riverside vegetation. The standard Hausman test for violations of the Independence from Irrelevant Alternatives (IIA) assumption is found to give inconsistent results. The value estimates of the conditional and nested logit models are shown to be statistically similar, indicating that testing for IIA violation may be more complicated than currently assumed, thus raising questions about the efficacy of the more complex nested logit model.
    Keywords: cheap talk, choice modelling, Mekong River Delta, wetland values, willingness to pay; Resource/Energy Economics and Policy; JEL: Q25
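    As background on the estimation method (a textbook formulation, not specific to this study): in a conditional logit model, the probability that a respondent chooses alternative $i$ from choice set $J$, given attribute vectors $x_j$, is

    $P(i) = \frac{\exp(\beta^\top x_i)}{\sum_{j \in J} \exp(\beta^\top x_j)},$

    and IIA follows because the odds ratio $P(i)/P(j) = \exp(\beta^\top (x_i - x_j))$ is unaffected by the attributes of any third alternative; the nested logit relaxes exactly this restriction by grouping alternatives into nests. Implicit prices (household willingness to pay for a marginal improvement in an attribute) are then recovered as the ratio of that attribute's coefficient to the negative of the cost coefficient.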